Approximation properties of deep ReLU CNNs

Authors

Abstract

This paper focuses on establishing $$L^2$$ approximation properties for deep ReLU convolutional neural networks (CNNs) in two-dimensional space. The analysis is based on a decomposition theorem for convolutional kernels with large spatial size and multi-channels. Given the decomposition result, the property of the ReLU activation function, and a specific structure of channels, a universal approximation theorem for deep ReLU CNNs with the classic structure is obtained by showing its connection with one-hidden-layer ReLU neural networks (NNs). Furthermore, approximation properties are also obtained for one version of ResNet, pre-act ResNet, and the MgNet architecture, based on connections between these networks.
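The kernel algebra behind the decomposition can be sketched numerically: composing two 3×3 convolutions (without the ReLU nonlinearity, to isolate the linear part) is exactly a single convolution with the 5×5 kernel obtained by convolving the two small kernels, which is the converse direction of decomposing a large kernel into a multi-layer CNN with small kernels. The snippet below is an illustrative check of this identity, assuming PyTorch; the kernel sizes, single channel, and variable names are choices made for the example, not the construction used in the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Two small 3x3 kernels and a random single-channel image (shapes: N, C, H, W).
k1 = torch.randn(1, 1, 3, 3)
k2 = torch.randn(1, 1, 3, 3)
x = torch.randn(1, 1, 16, 16)

# Path A: apply the two 3x3 kernels in sequence (valid convolutions, no padding),
# as a two-layer linear CNN would.
y_two_layer = F.conv2d(F.conv2d(x, k2), k1)

# Path B: build the equivalent 5x5 kernel as the (true) convolution k1 * k2.
# conv2d computes a cross-correlation, so one kernel is flipped to obtain a convolution.
k_big = F.conv2d(F.pad(k1, (2, 2, 2, 2)), torch.flip(k2, dims=[2, 3]))
y_one_layer = F.conv2d(x, k_big)

# Both paths produce the same 12x12 output up to floating-point error.
print(k_big.shape)                                           # torch.Size([1, 1, 5, 5])
print(torch.allclose(y_two_layer, y_one_layer, atol=1e-5))   # True
```

Valid (unpadded) convolutions are used so that the identity holds exactly; with zero padding inserted between layers the two paths would differ near the image boundary.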

Related articles

Training Better CNNs Requires to Rethink ReLU

With the rapid development of Deep Convolutional Neural Networks (DCNNs), numerous works focus on designing better network architectures (i.e., AlexNet, VGG, Inception, ResNet, DenseNet, etc.). Nevertheless, all these networks have the same characteristic: each convolutional layer is followed by an activation layer, and a Rectified Linear Unit (ReLU) layer is the most used among them. In this wor...

Optimal approximation of continuous functions by very deep ReLU networks

We prove that deep ReLU neural networks with conventional fully-connected architectures with W weights can approximate continuous ν-variate functions f with uniform error not exceeding $$a_\nu\,\omega_f(c_\nu W^{-2/\nu})$$, where $$\omega_f$$ is the modulus of continuity of f and $$a_\nu, c_\nu$$ are some ν-dependent constants. This bound is tight. Our construction is inherently deep and nonlinear: the obtained approximation rate cann...
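As a quick sanity check on the stated rate (using only the bound quoted above, with $$\tilde f_W$$ as ad hoc notation for the network approximant with W weights): if f is L-Lipschitz, then $$\omega_f(t) \le L\,t$$, and the bound specializes to

$$\|f - \tilde f_W\|_{\infty} \le a_\nu\,\omega_f\!\left(c_\nu W^{-2/\nu}\right) \le a_\nu c_\nu L\, W^{-2/\nu},$$

so for fixed input dimension ν the uniform error decays like $$W^{-2/\nu}$$ as the number of weights grows.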

Deep Learning using Rectified Linear Units (ReLU)

We introduce the use of rectified linear units (ReLU) as the classification function in a deep neural network (DNN). Conventionally, ReLU is used as an activation function in DNNs, with Softmax function as their classification function. However, there have been several studies on using a classification function other than Softmax, and this study is an addition to those. We accomplish this by ta...

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

We study the necessary and sufficient complexity of ReLU neural networks—in terms of depth and number of weights—which is required for approximating classifier functions in the $$L^2$$-sense. As a model class, we consider the set $$\mathcal{E}^\beta(\mathbb{R}^d)$$ of possibly discontinuous piecewise $$C^\beta$$ functions $$f : [-1/2, 1/2]^d \to \mathbb{R}$$, where the different “smooth regions” of f are separated by $$C^\beta$$ hypersurfaces. For given dimension d ≥ ...

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width $$w_{\min}(d)$$ so that ReLU nets of width $$w_{\min}(d)$$ (and arbitrary depth) can approximate any continuous function on the unit cube $$[0,1]^d$$ arbitrarily well? For ReLU nets near this minimal width, what can one say ...

Journal

Journal title: Research in the Mathematical Sciences

Year: 2022

ISSN: 2522-0144, 2197-9847

DOI: https://doi.org/10.1007/s40687-022-00336-0